Image tampering forensics network based on residual feedback and self-attention
Guolong YUAN, Yujin ZHANG, Yang LIU
Journal of Computer Applications 2023, 43(9): 2925-2931. DOI: 10.11772/j.issn.1001-9081.2022081283

Existing multi-tampering-type image forgery detection algorithms based on noise features often cannot effectively detect the feature differences between tampered and untampered areas, especially for the copy-move tampering type. To address this, a dual-stream image tampering forensics network fusing residual feedback and a self-attention mechanism was proposed, in which the two streams respectively detect tampering artifacts such as unnatural RGB pixel edges and local noise inconsistencies. Firstly, in the encoder stage, multiple dual residual units integrating residual feedback were used to extract the relevant tampering features and obtain coarse feature maps. Secondly, the coarse feature maps were further reinforced by an improved self-attention mechanism. Thirdly, the corresponding shallow features of the encoder and deep features of the decoder were fused. Finally, the tampering features extracted by the two streams were fused in series, and pixel-level localization of the tampered area was realized through a special convolution operation. Experimental results show that the F1 score and Area Under Curve (AUC) value of the proposed network on the COVERAGE dataset are better than those of the comparison networks. The F1 score of the proposed network is 9.8 and 7.7 percentage points higher than that of TED-Net (Two-stream Encoder-Decoder Network) on the NIST16 and Columbia datasets, and the AUC increases by 1.1 and 6.5 percentage points, respectively. The proposed network achieves good results in detecting copy-move tampering and is also suitable for other tampering types. At the same time, it can accurately locate tampered areas at the pixel level, and its detection performance is superior to that of the comparison networks.
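The dual-stream architecture outlined in this abstract can be illustrated with a small sketch. The PyTorch code below shows the general idea only: an RGB stream and a noise stream built from residual units with feedback, a self-attention block that reinforces the coarse encoder features, and a serial fusion that yields a pixel-level tampering map. All module names, channel widths, and the simple high-pass noise extractor are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of a dual-stream tampering-forensics network (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualResidualUnit(nn.Module):
    """Two stacked residual branches; each feeds its input back into its output."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(x + self.bn1(self.conv1(x)))      # first residual branch
        out = F.relu(out + self.bn2(self.conv2(out)))  # second residual branch (feedback)
        return out

class SelfAttention(nn.Module):
    """Standard non-local style self-attention over spatial positions."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned gate on the attention output

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)       # B x HW x C'
        k = self.key(x).flatten(2)                          # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)                 # B x HW x HW
        v = self.value(x).flatten(2)                         # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class Stream(nn.Module):
    """One encoder-decoder stream producing a per-pixel feature map."""
    def __init__(self, in_ch, width=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            DualResidualUnit(width),
            nn.MaxPool2d(2),
            DualResidualUnit(width),
        )
        self.attn = SelfAttention(width)
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        feats = self.attn(self.enc(x))   # reinforce the coarse features
        return self.dec(feats)

class DualStreamForensics(nn.Module):
    """RGB stream + noise stream, fused in series for a pixel-level tamper mask."""
    def __init__(self):
        super().__init__()
        self.rgb_stream = Stream(in_ch=3)
        self.noise_stream = Stream(in_ch=3)
        self.head = nn.Conv2d(64, 1, 1)  # fuse the two 32-channel streams

    def forward(self, rgb):
        # A plain high-pass residual stands in for the paper's noise-feature extractor.
        noise = rgb - F.avg_pool2d(rgb, 3, stride=1, padding=1)
        fused = torch.cat([self.rgb_stream(rgb), self.noise_stream(noise)], dim=1)
        return torch.sigmoid(self.head(fused))  # tamper-probability map

if __name__ == "__main__":
    model = DualStreamForensics()
    mask = model(torch.randn(1, 3, 128, 128))
    print(mask.shape)  # torch.Size([1, 1, 128, 128])
```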

U-shaped feature pyramid network for image inpainting forensics
Wanli SHEN, Yujin ZHANG, Wan HU
Journal of Computer Applications 2023, 43(2): 545-551. DOI: 10.11772/j.issn.1001-9081.2021122107

Image inpainting is a common method of image tampering. Deep-learning-based inpainting methods can generate more complex structures and even new objects, making image inpainting forensics more challenging. Therefore, an end-to-end U-shaped Feature Pyramid Network (FPN) was proposed for image inpainting forensics. Firstly, multi-scale feature extraction was performed by a top-down VGG16 module; then a bottom-up feature pyramid architecture was used to up-sample the fused feature maps, with the overall process forming a U-shaped structure. Next, global and local attention mechanisms were combined to highlight the inpainting traces. Finally, a fusion loss function was used to improve the prediction rate of the inpainted area. Experimental results show that the proposed method achieves an average F1 score and Intersection over Union (IoU) of 0.791 9 and 0.747 2, respectively, on various deep inpainting datasets. Compared with the existing Localization of Diffusion-based Inpainting (LDI), Patch-based Convolutional Neural Network (Patch-CNN) and High-Pass Fully Convolutional Network (HP-FCN) methods, the proposed method has better generalization ability and stronger robustness to JPEG compression.
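As a rough illustration of the pipeline sketched in this abstract, the PyTorch code below builds a U-shaped network from VGG16 stages, fuses the multi-scale features with lateral convolutions and progressive up-sampling, and trains against a combined (fusion) loss mixing binary cross-entropy with a soft-IoU term. The stage splits, channel widths, and loss weighting are assumptions made for the sake of a runnable example, not the authors' configuration; the attention modules are omitted for brevity.

```python
# Minimal sketch of a U-shaped FPN for inpainting forensics (assumed configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class UShapedFPN(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        features = vgg16().features
        # Split VGG16 into four stages, each ending at a max-pool layer.
        self.stage1 = features[:10]    # 1/4 resolution, 128 channels
        self.stage2 = features[10:17]  # 1/8 resolution, 256 channels
        self.stage3 = features[17:24]  # 1/16 resolution, 512 channels
        self.stage4 = features[24:]    # 1/32 resolution, 512 channels
        self.lateral = nn.ModuleList([
            nn.Conv2d(c, width, 1) for c in (128, 256, 512, 512)
        ])
        self.smooth = nn.Conv2d(width, width, 3, padding=1)
        self.head = nn.Conv2d(width, 1, 1)

    def forward(self, x):
        c1 = self.stage1(x)
        c2 = self.stage2(c1)
        c3 = self.stage3(c2)
        c4 = self.stage4(c3)
        # Pyramid fusion: start from the deepest map and merge upward.
        p = self.lateral[3](c4)
        for lat, c in zip(self.lateral[2::-1], (c3, c2, c1)):
            p = F.interpolate(p, size=c.shape[-2:], mode="bilinear", align_corners=False)
            p = p + lat(c)
        logits = self.head(self.smooth(p))
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)

def fusion_loss(logits, target, eps=1e-6):
    """Assumed combined loss: binary cross-entropy plus a soft-IoU term."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    union = prob.sum() + target.sum() - inter
    return bce + (1.0 - inter / (union + eps))

if __name__ == "__main__":
    model = UShapedFPN()
    img = torch.randn(1, 3, 256, 256)
    mask = torch.randint(0, 2, (1, 1, 256, 256)).float()
    logits = model(img)
    print(logits.shape, fusion_loss(logits, mask).item())
```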
